In econometrics, the '''generalized method of moments''' ('''GMM''') is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the distribution function of the data may not be known, and therefore maximum likelihood estimation is not applicable. The method requires that a certain number of ''moment conditions'' be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the true values of the parameters. The GMM method then minimizes a certain norm of the sample averages of the moment conditions. The GMM estimators are known to be consistent, asymptotically normal, and efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions.

GMM was developed by Lars Peter Hansen in 1982 as a generalization of the method of moments, which was introduced by Karl Pearson in 1894. Hansen shared the 2013 Nobel Memorial Prize in Economic Sciences in part for this work.

== Description ==
Suppose the available data consist of ''T'' observations <math>\{Y_1, \dots, Y_T\}</math>, where each observation ''Y''<sub>''t''</sub> is an ''n''-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter <math>\theta \in \Theta</math>. The goal of the estimation problem is to find the "true" value of this parameter, ''θ''<sub>0</sub>, or at least a reasonably close estimate.

A general assumption of GMM is that the data ''Y''<sub>''t''</sub> are generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables ''Y''<sub>''t''</sub> is a special case of this condition.)

In order to apply GMM, we need to have "moment conditions", i.e. we need to know a vector-valued function ''g''(''Y'', ''θ'') such that

:<math>m(\theta) \equiv \operatorname{E}\big[\,g(Y_t,\theta)\,\big] = 0,</math>

where E denotes expectation, and ''Y''<sub>''t''</sub> is a generic observation. Moreover, the function ''m''(''θ'') must differ from zero for <math>\theta \neq \theta_0</math>, or otherwise the parameter ''θ'' will not be point-identified.

The basic idea behind GMM is to replace the theoretical expected value E[·] with its empirical analog, the sample average

:<math>\hat{m}(\theta) \equiv \frac{1}{T}\sum_{t=1}^T g(Y_t,\theta),</math>

and then to minimize the norm of this expression with respect to ''θ''. The minimizing value of ''θ'' is our estimate for ''θ''<sub>0</sub>.

By the law of large numbers, <math>\hat{m}(\theta) \approx \operatorname{E}[g(Y_t,\theta)] = m(\theta)</math> for large values of ''T'', and thus we expect that <math>\hat{m}(\theta_0) \approx m(\theta_0) = 0</math>. The generalized method of moments looks for a number <math>\hat\theta</math> which would make <math>\hat{m}(\hat\theta)</math> as close to zero as possible. Mathematically, this is equivalent to minimizing a certain norm of <math>\hat{m}(\theta)</math> (the norm of ''m'', denoted ||''m''||, measures the distance between ''m'' and zero). The properties of the resulting estimator will depend on the particular choice of the norm function, and therefore the theory of GMM considers an entire family of norms, defined as

:<math>\big\| \hat{m}(\theta) \big\|^2_{W} = \hat{m}(\theta)'\,W\,\hat{m}(\theta),</math>

where ''W'' is a positive-definite weighting matrix, and ''m''′ denotes transposition. In practice, the weighting matrix ''W'' is computed based on the available data set, and this estimate will be denoted <math>\hat{W}</math>. Thus, the GMM estimator can be written as

:<math>\hat{\theta} = \operatorname*{arg\,min}_{\theta\in\Theta} \left(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\right)' \hat{W} \left(\frac{1}{T}\sum_{t=1}^T g(Y_t,\theta)\right).</math>

Under suitable conditions this estimator is consistent, asymptotically normal, and, with the right choice of weighting matrix, also asymptotically efficient.
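As an illustration of the estimator defined above, the following Python sketch implements a simple two-step GMM procedure: a first minimization with the identity weighting matrix yields a consistent preliminary estimate, and a second pass re-weights by the inverse sample covariance of the moment contributions, the usual choice for asymptotic efficiency in the iid case. The function names <code>gmm_two_step</code> and <code>objective</code>, and the use of <code>scipy.optimize.minimize</code>, are illustrative assumptions for this sketch, not part of the article or of any standard GMM library.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def gmm_two_step(g, data, theta0):
    """Two-step GMM sketch (hypothetical helper, not a library API).

    g(data, theta) must return the T x k matrix whose t-th row is the
    moment contribution g(Y_t, theta); theta0 is a starting value.
    """
    def objective(theta, W):
        m_hat = g(data, theta).mean(axis=0)  # sample average of the moments
        return m_hat @ W @ m_hat             # quadratic-form norm ||m_hat||^2_W

    k = g(data, theta0).shape[1]

    # Step 1: the identity weighting matrix already gives a consistent estimate.
    step1 = minimize(objective, theta0, args=(np.eye(k),), method="BFGS")

    # Step 2: re-weight by the inverse sample covariance of the moment
    # contributions evaluated at the step-1 estimate (efficient in the iid case).
    G = g(data, step1.x)
    S = G.T @ G / G.shape[0]
    step2 = minimize(objective, step1.x, args=(np.linalg.inv(S),), method="BFGS")
    return step2.x
</syntaxhighlight>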
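As a quick check of the sketch, one can simulate a hypothetical linear instrumental-variables model <math>y_t = \theta_0 x_t + u_t</math> with an endogenous regressor and two instruments, using the moment condition <math>g(Y_t,\theta) = z_t\,(y_t - \theta x_t)</math>; the simulated data and the true value <math>\theta_0 = 2</math> below are made up for the example.

<syntaxhighlight lang="python">
rng = np.random.default_rng(0)
T = 5_000
z = rng.normal(size=(T, 2))                 # two instruments
v = rng.normal(size=T)
x = z @ np.array([1.0, 0.5]) + v            # regressor driven by the instruments
u = 0.5 * v + rng.normal(size=T)            # error correlated with x (endogeneity)
y = 2.0 * x + u                             # true parameter value is 2.0

def g(data, theta):
    y, x, z = data
    return z * (y - theta[0] * x)[:, None]  # g(Y_t, theta) = z_t * (y_t - theta x_t)

print(gmm_two_step(g, (y, x, z), theta0=np.array([0.0])))  # close to [2.0]
</syntaxhighlight>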